

Search for: All records

Creators/Authors contains: "Holmes, George L"


  1. Advancements in robotics and AI have increased the demand for interactive robots in healthcare and assistive applications. However, ensuring safe and effective physical human-robot interactions (pHRIs) remains challenging due to the complexities of human motor communication and intent recognition. Traditional physics-based models struggle to capture the dynamic nature of human force interactions, limiting robotic adaptability. To address these limitations, neural networks (NNs) have been explored for force-movement intention prediction. While multi-layer perceptron (MLP) networks show potential, they struggle with temporal dependencies and generalization. Long Short-Term Memory (LSTM) networks effectively model sequential dependencies, while Convolutional Neural Networks (CNNs) enhance spatial feature extraction from human force data. Building on these strengths, this study introduces a hybrid LSTM-CNN framework to improve force-movement intention prediction, increasing accuracy from 69% to 86% through effective denoising and advanced architectures. The combined CNN-LSTM network proved particularly effective in handling individualized force-velocity relationships and presents a generalizable model, paving the way for more adaptive strategies in robot guidance. These findings highlight the importance of integrating spatial and temporal modeling to enhance robot precision, responsiveness, and human-robot collaboration. Index Terms: Physical Human-Robot Interaction, Intention Detection, Machine Learning, Long Short-Term Memory (LSTM)
    Free, publicly-accessible full text available August 18, 2026
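     The hybrid CNN-LSTM pipeline described above (spatial feature extraction from force signals, then sequential modeling, then a velocity/intent readout) can be sketched in plain NumPy. This is a minimal illustration of the architecture class only; all layer sizes, parameter names, and the final linear readout are assumptions, not the authors' implementation.

     ```python
     import numpy as np

     def sigmoid(v):
         return 1.0 / (1.0 + np.exp(-v))

     def conv1d(x, kernels):
         """Valid 1-D convolution of a (T, C) force sequence with (K, C, F)
         kernels, followed by ReLU: the 'spatial feature extraction' stage."""
         T, C = x.shape
         K, _, F = kernels.shape
         out = np.empty((T - K + 1, F))
         for t in range(T - K + 1):
             window = x[t:t + K]                     # (K, C) slice of the signal
             out[t] = np.einsum('kc,kcf->f', window, kernels)
         return np.maximum(out, 0.0)

     def lstm(seq, Wx, Wh, b):
         """Minimal LSTM over a (T, F) feature sequence; returns the final
         hidden state, which summarizes the temporal dependencies."""
         H = Wh.shape[0]
         h, c = np.zeros(H), np.zeros(H)
         for x_t in seq:
             z = Wx.T @ x_t + Wh.T @ h + b           # all four gates at once, (4H,)
             i, f, g, o = np.split(z, 4)
             i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
             c = f * c + i * np.tanh(g)              # cell-state update
             h = o * np.tanh(c)
         return h

     def predict_velocity(forces, params):
         """Force time series (T, C) -> 2-D movement-intent (velocity) estimate."""
         feats = conv1d(forces, params['kernels'])   # CNN stage
         h = lstm(feats, params['Wx'], params['Wh'], params['b'])  # LSTM stage
         return params['Wout'] @ h                   # linear readout (assumed)
     ```

     A usage sketch: with 3-axis force input, 8 convolutional filters of width 5, and hidden size 8, `predict_velocity` maps a `(50, 3)` force window to a 2-D velocity command. Training (denoising, loss, optimizer) is omitted here since the abstract does not specify it.
     
     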
  2. This work challenges the common assumption in physical human-robot interaction (pHRI) that the movement intention of a human user can be simply modeled with dynamic equations relating forces to movements, regardless of the user. Studies in physical human-human interaction (pHHI) suggest that interaction forces carry sophisticated information that reveals motor skills and roles in the partnership and even promotes adaptation and motor learning. In this view, the simple force-displacement equations often used in pHRI studies may not be sufficient. To test this assumption, this work measured and analyzed the interaction forces (F) between two humans as the leader guided the blindfolded follower along a randomly chosen path. The actual trajectory of the follower was transformed into the velocity commands (V) that would allow a hypothetical robot follower to track the same trajectory. Possible analytical relationships between F and V were then obtained using neural network training. Results suggest that while F helps predict V, the relationship is not straightforward, that seemingly irrelevant components of F may be important, that force-velocity relationships are unique to each human follower, and that human neural control of movement may affect the prediction of movement intent. It is suggested that user-specific, stereotype-free controllers may more accurately decode human intent in pHRI.
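The trajectory-to-velocity transformation mentioned in item 2 (converting the follower's measured path into the commands a hypothetical robot follower would need) can be sketched with simple forward differencing. This is an illustrative assumption about the transformation; the abstract does not state the exact method, sampling rate, or any smoothing the authors applied.

```python
import numpy as np

def trajectory_to_velocity_commands(positions, dt):
    """Convert a follower trajectory of shape (T, 2), sampled every dt
    seconds, into (T-1, 2) velocity commands via forward differences.
    A sketch only; real pipelines typically also filter sensor noise."""
    positions = np.asarray(positions, dtype=float)
    return np.diff(positions, axis=0) / dt
```

As a sanity check, a straight-line path traversed at constant speed should yield constant velocity commands; noisy real trajectories would first need low-pass filtering before differencing amplifies the noise.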